
    Fisher Motion Descriptor for Multiview Gait Recognition

    The goal of this paper is to identify individuals by analyzing their gait. Instead of using binary silhouettes as input data (as done in many previous works), we propose and evaluate the use of motion descriptors based on densely sampled short-term trajectories. We take advantage of state-of-the-art people detectors to define custom spatial configurations of the descriptors around the target person, obtaining a rich representation of the gait motion. The local motion features (described by the Divergence-Curl-Shear descriptor [1]) extracted on the different spatial areas of the person are combined into a single high-level gait descriptor by using the Fisher Vector encoding [2]. The proposed approach, coined Pyramidal Fisher Motion, is experimentally validated on the 'CASIA' dataset [3] (parts B and C), the 'TUM GAID' dataset [4], the 'CMU MoBo' dataset [5] and the recent 'AVA Multiview Gait' dataset [6]. The results show that this new approach achieves state-of-the-art results on the problem of gait recognition, allowing us to recognize walking people from diverse viewpoints in single- and multiple-camera setups, wearing different clothes, carrying bags, walking at diverse speeds and not limited to straight walking paths.
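
    Since the key aggregation step here is the Fisher Vector encoding [2] of local motion descriptors, a minimal sketch may help. It assumes a diagonal-covariance GMM and standard power/L2 normalisation; it illustrates the encoding only, not the authors' Pyramidal Fisher Motion pipeline (descriptor dimensions and component counts are placeholders).

```python
# Minimal Fisher Vector encoding over local motion descriptors, assuming a
# diagonal-covariance GMM (in the spirit of [2]); dimensions are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode local descriptors (N x D) into a single Fisher Vector."""
    q = gmm.predict_proba(descriptors)              # N x K soft assignments
    n = descriptors.shape[0]
    means, weights = gmm.means_, gmm.weights_       # K x D, K
    sigmas = np.sqrt(gmm.covariances_)              # K x D (diagonal case)

    parts = []
    for j in range(gmm.n_components):
        diff = (descriptors - means[j]) / sigmas[j]             # N x D
        qj = q[:, j:j + 1]                                      # N x 1
        # Gradients of the log-likelihood w.r.t. means and variances.
        g_mu = (qj * diff).sum(0) / (n * np.sqrt(weights[j]))
        g_sig = (qj * (diff ** 2 - 1)).sum(0) / (n * np.sqrt(2 * weights[j]))
        parts += [g_mu, g_sig]
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))          # power-normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)        # L2-normalisation

# Usage: fit the GMM on pooled training descriptors, then encode per video.
train = np.random.randn(5000, 32)                   # placeholder descriptors
gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(train)
video_descriptor = fisher_vector(np.random.randn(300, 32), gmm)
```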

    Evaluation of CNN architectures for gait recognition based on optical flow maps

    This work targets people identification in video based on the way they walk (i.e., gait) by using deep learning architectures. We explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e., optical flow components). The low number of training samples for each subject and the use of a test set containing subjects different from the training ones make the search for a good CNN architecture a challenging task. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
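
    A minimal sketch of the kind of network explored here: a small CNN whose input stacks the two optical-flow components over L consecutive frames. The channel layout follows the abstract's idea; the layer sizes, frame count and subject count below are illustrative assumptions, not the architectures evaluated in the paper.

```python
# Illustrative CNN over stacked optical-flow maps (PyTorch); the channel
# layout (2 flow components x L frames) follows the paper's idea, but all
# layer sizes are assumptions, not the evaluated architectures.
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    def __init__(self, num_frames=25, num_subjects=150):
        super().__init__()
        in_ch = 2 * num_frames                  # x- and y-flow per frame
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 96, kernel_size=7, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(96, 192, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(192, 256, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(256, num_subjects)

    def forward(self, x):                       # x: (B, 2*L, H, W)
        f = self.features(x).flatten(1)         # high-level gait descriptor
        return self.classifier(f)

model = FlowCNN()
logits = model(torch.randn(4, 50, 60, 60))      # a batch of flow volumes
```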

    Mixing body-parts model for 2D human pose estimation in stereo videos

    This study targets 2D articulated human pose estimation (i.e. localisation of body limbs) in stereo videos. Although in recent years depth-based devices (e.g. Microsoft Kinect) have gained popularity, as they perform very well in controlled indoor environments (e.g. living rooms, operating theatres or gyms), they suffer clear problems in outdoor scenarios and, therefore, human pose estimation is still an interesting unsolved problem. The authors propose here a novel approach that is able to localise upper-body keypoints (i.e. shoulders, elbows and wrists) in temporal sequences of stereo image pairs. The authors' method starts by locating and segmenting people in the image pairs by using disparity and appearance information. Then, a set of candidate body poses is computed for each view independently. Finally, temporal and stereo consistency is applied to estimate a final 2D pose. The authors validate their model on three challenging datasets: 'stereo human pose estimation dataset', 'poses in the wild' and 'INRIA 3DMovie'. The experimental results show that the authors' model not only establishes new state-of-the-art results on stereo sequences, but also brings improvements in monocular sequences.
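
    The stereo-consistency step can be illustrated with a toy scorer: on a rectified pair, corresponding joints should differ mainly by a positive horizontal disparity. The cost form and tolerances below are assumptions for illustration, not the authors' actual formulation.

```python
# Toy stereo-consistency scorer for candidate upper-body poses: on a
# rectified pair, matching joints should share near-zero vertical shift and
# similar horizontal disparity. Cost form and tolerance are assumptions.
import numpy as np

def stereo_pose_cost(pose_left, pose_right, max_vert_shift=3.0):
    """pose_*: (J, 2) arrays of (x, y) keypoints for J joints.
    Lower cost means the candidate pair is more stereo-consistent."""
    d = pose_left - pose_right
    disparities = d[:, 0]                  # horizontal shift per joint
    vert = np.abs(d[:, 1])                 # ~0 for rectified image pairs
    if np.any(disparities <= 0):           # points in front: left x > right x
        return np.inf
    # Penalise vertical drift plus disparity spread (the joints of one
    # person lie at similar depths, hence similar disparities).
    return vert.mean() / max_vert_shift + np.std(disparities)

def best_pair(cands_left, cands_right):
    """Pick the most stereo-consistent pair of candidate poses."""
    scored = [(stereo_pose_cost(l, r), i, j)
              for i, l in enumerate(cands_left)
              for j, r in enumerate(cands_right)]
    _, i, j = min(scored)
    return cands_left[i], cands_right[j]
```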

    Human interaction categorization by using audio-visual cues

    Human Interaction Recognition (HIR) in uncontrolled TV video material is a very challenging problem because of the huge intra-class variability of the classes (due to large differences in the way actions are performed, lighting conditions and camera viewpoints, amongst others) as well as the small inter-class variability (e.g., the visual difference between hug and kiss is very subtle). Most previous works have focused only on visual information (i.e., the image signal), thus missing an important source of information present in human interactions: the audio. So far, such approaches have not proven discriminative enough. This work proposes the use of an Audio-Visual Bag of Words (AVBOW) as a more powerful mechanism to approach the HIR problem than the traditional Visual Bag of Words (VBOW). We show in this paper that the combined use of video and audio information yields better classification results than video alone. Our approach has been validated on the challenging TVHID dataset, showing that the proposed AVBOW provides statistically significant improvements over the VBOW employed in the related literature.
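
    A minimal sketch of an audio-visual bag of words: separate codebooks quantise visual and audio local features, and the concatenated histograms feed a linear classifier. Codebook sizes, feature dimensions and the k-means/SVM choices below are illustrative assumptions, not the paper's exact setup.

```python
# Audio-visual bag-of-words sketch: per-modality k-means codebooks,
# concatenated normalised histograms, linear SVM on top. All sizes and
# the random placeholder features are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def bow_histogram(local_feats, codebook):
    words = codebook.predict(local_feats)
    h = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return h / (h.sum() + 1e-12)

def avbow(visual_feats, audio_feats, vis_cb, aud_cb):
    return np.concatenate([bow_histogram(visual_feats, vis_cb),
                           bow_histogram(audio_feats, aud_cb)])

# Fit codebooks on pooled training features, encode each clip, train an SVM.
vis_cb = KMeans(n_clusters=500).fit(np.random.randn(10000, 96))
aud_cb = KMeans(n_clusters=100).fit(np.random.randn(10000, 13))  # e.g. MFCCs
X = [avbow(np.random.randn(200, 96), np.random.randn(80, 13), vis_cb, aud_cb)
     for _ in range(20)]
clf = LinearSVC().fit(X, np.random.randint(0, 4, 20))            # 4 classes
```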

    The AVA Multi-View Dataset for Gait Recognition

    In this paper, we introduce a new multi-view dataset for gait recognition. The dataset was recorded in an indoor scenario, using a setup of six convergent cameras to produce multi-view videos, where each video depicts a walking human. Each sequence contains at least three complete gait cycles. The dataset contains videos of 20 walking persons with a large variety of body sizes, who walk along straight and curved paths. The multi-view videos have been processed to produce foreground silhouettes. To validate our dataset, we have extended some appearance-based 2D gait recognition methods to work with 3D data, obtaining very encouraging results. The dataset, as well as the camera calibration information, is freely available for research purposes.
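
    The appearance-based 2D methods mentioned typically start from exactly the foreground silhouettes this dataset provides; a common representation is the Gait Energy Image (GEI), the per-pixel average of aligned silhouettes over a gait cycle. A generic sketch, not the dataset's official tooling:

```python
# Gait Energy Image: per-pixel average of aligned binary silhouettes over
# one gait cycle. A generic appearance-based baseline, shown for context.
import numpy as np

def gait_energy_image(silhouettes):
    """silhouettes: (T, H, W) aligned binary masks covering one gait cycle
    (each AVA sequence contains at least three full cycles).
    Returns an H x W float image in [0, 1]."""
    return np.mean(np.asarray(silhouettes, dtype=float), axis=0)
```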

    UCO physical rehabilitation: new dataset and study of human pose estimation methods on physical rehabilitation exercises

    Physical rehabilitation plays a crucial role in restoring motor function following injuries or surgeries. However, the challenge of overcrowded waiting lists often hampers doctors' ability to monitor patients' recovery progress in person. Deep Learning methods offer a solution by enabling doctors to optimize their time with each patient and distinguish between those requiring specific attention and those making positive progress. Doctors use the flexion angle of limbs as a cue to assess a patient's mobility level during rehabilitation. From a Computer Vision perspective, this task can be framed as automatically estimating the pose of the target body limbs in an image. The objectives of this study can be summarized as follows: (i) evaluating and comparing multiple pose estimation methods; (ii) analyzing how the subject's position and camera viewpoint impact the estimation; and (iii) determining whether 3D estimation methods are necessary or if 2D estimation suffices for this purpose. To conduct this technical study, and due to the limited availability of public datasets related to physical rehabilitation exercises, we introduced a new dataset featuring 27 individuals performing eight diverse physical rehabilitation exercises focusing on various limbs and body positions. Each exercise was recorded using five RGB cameras capturing different viewpoints of the person. An infrared tracking system named OptiTrack was utilized to establish the ground-truth positions of the joints in the limbs under study. The results, supported by statistical tests, show that not all state-of-the-art pose estimators perform equally in the presented situations (e.g., patient lying on the stretcher vs. standing). Statistical differences exist between camera viewpoints, with the frontal view being the most convenient. Additionally, the study concludes that 2D pose estimators are adequate for estimating joint angles given the selected camera viewpoints.
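
    The flexion angle used as a mobility cue can be computed from three estimated joint positions (e.g. shoulder, elbow and wrist for the elbow angle); the same formula works for 2D and 3D keypoints, which is what makes the 2D-vs-3D comparison meaningful. A generic sketch, not the study's code:

```python
# Flexion angle at a joint from three keypoints: the angle at joint_b
# between segments b->a and b->c. Works for 2D or 3D coordinates.
import numpy as np

def flexion_angle(joint_a, joint_b, joint_c):
    """Angle in degrees at joint_b (e.g. shoulder, elbow, wrist -> elbow)."""
    u = np.asarray(joint_a, float) - np.asarray(joint_b, float)
    v = np.asarray(joint_c, float) - np.asarray(joint_b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(flexion_angle([0, 1], [0, 0], [1, 0]))   # right angle -> 90.0
```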

    Stereo Pictorial Structure for 2D Articulated Human Pose Estimation

    In this paper, we consider the problem of 2D human pose estimation on stereo image pairs. In particular, we aim at estimating the location, orientation and scale of upper-body parts of people detected in stereo image pairs from realistic stereo videos that can be found on the Internet. To address this task, we propose a novel pictorial structure model to exploit the stereo information included in such stereo image pairs: the Stereo Pictorial Structure (SPS). To validate our proposed model, we contribute a new annotated dataset of stereo image pairs, the Stereo Human Pose Estimation Dataset (SHPED), obtained from YouTube stereoscopic video sequences, depicting people in challenging poses and diverse indoor and outdoor scenarios. The experimental results on SHPED indicate that SPS improves on state-of-the-art monocular models thanks to the appropriate use of the stereo information.
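
    For readers unfamiliar with pictorial structures, the sketch below shows classic min-sum inference on a chain of parts: unary appearance costs plus quadratic deformation costs between connected parts. SPS additionally couples the two views with a stereo term; only the monocular backbone is shown, with brute-force message passing for clarity (real implementations use generalized distance transforms), and all cost shapes are illustrative.

```python
# Min-sum inference for a chain-structured pictorial structure. Brute-force
# message passing; SPS adds a stereo coupling term on top of this backbone.
import numpy as np

def chain_min_sum(unaries, offsets, alpha=1.0):
    """unaries: list of (H, W) appearance cost maps, ordered root -> leaf.
    offsets[p]: expected (dy, dx) of part p+1 relative to part p."""
    h, w = unaries[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    cost = unaries[-1].copy()
    argmins = []
    for p in range(len(unaries) - 2, -1, -1):       # messages leaf -> root
        dy, dx = offsets[p]
        best = np.empty((h, w))
        best_loc = np.zeros((h, w, 2), dtype=int)
        for y in range(h):
            for x in range(w):
                deform = alpha * ((ys - y - dy) ** 2 + (xs - x - dx) ** 2)
                total = cost + deform               # child cost seen by parent
                idx = np.unravel_index(np.argmin(total), total.shape)
                best[y, x], best_loc[y, x] = total[idx], idx
        argmins.append(best_loc)
        cost = unaries[p] + best
    loc = np.unravel_index(np.argmin(cost), cost.shape)
    parts = [loc]                                    # backtrack root -> leaf
    for best_loc in reversed(argmins):
        loc = tuple(best_loc[loc])
        parts.append(loc)
    return parts

# Toy usage: three parts on a 20 x 20 grid, each child expected 5 px below.
maps = [np.random.rand(20, 20) for _ in range(3)]
print(chain_min_sum(maps, offsets=[(5, 0), (5, 0)]))
```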

    sSLAM: Speeded-Up Visual SLAM Mixing Artificial Markers and Temporary Keypoints

    Environment landmarks are generally employed by visual SLAM (vSLAM) methods in the form of keypoints. However, these landmarks are unstable over time because they belong to areas that tend to change, e.g., shadows or moving objects. To solve this, other authors have proposed combining keypoints with artificial markers distributed in the environment so as to facilitate the tracking process in the long run. Artificial markers are special elements (similar to beacons) that can be permanently placed in the environment to facilitate tracking. In any case, these systems keep a set of keypoints that is not likely to be reused, thus unnecessarily increasing the computing time required for tracking. This paper proposes a novel visual SLAM approach that efficiently combines keypoints and artificial markers, allowing for a substantial reduction in the computing time and memory required without noticeably degrading the tracking accuracy. In the first stage, our system creates a map of the environment using both keypoints and artificial markers, but once the map is created, the keypoints are removed and only the markers are kept. Thus, our map stores only long-lasting features of the environment (i.e., the markers). Then, for localization purposes, our algorithm uses the marker information along with temporary keypoints created at tracking time, which are removed after a while. Since our algorithm keeps only a small subset of recent keypoints, it is faster than state-of-the-art vSLAM approaches. The experimental results show that our proposed sSLAM compares favorably with ORB-SLAM2, ORB-SLAM3, OpenVSLAM and UcoSLAM in terms of speed, without statistically significant differences in accuracy. This research was funded by the project PID2019-103871GB-I00 of the Spanish Ministry of Economy, Industry and Competitiveness, FEDER, Project 1380047-F UCOFEDER-2021 of Andalusia and by the European Union–NextGeneration EU for requalification of the Spanish University System 2021–2023.

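    The temporary-keypoint idea can be sketched as a small buffer of keypoints created during tracking and discarded after a fixed age, while the persistent map holds only the markers. The age and size limits below are assumptions for illustration, not sSLAM's actual parameters.

```python
# Sketch of temporary keypoints: created at tracking time, pruned after a
# fixed age; the persistent map would store only marker landmarks. The
# age/size limits are illustrative assumptions, not sSLAM's parameters.
import time
from dataclasses import dataclass, field

@dataclass
class TempKeypoint:
    xyz: tuple          # triangulated 3D position
    descriptor: bytes   # binary descriptor (e.g. ORB-like)
    created: float = field(default_factory=time.monotonic)

class TemporaryKeypointBuffer:
    def __init__(self, max_age_s=5.0, max_size=500):
        self.max_age_s = max_age_s
        self.max_size = max_size
        self.points = []

    def add(self, kp: TempKeypoint):
        self.points.append(kp)
        if len(self.points) > self.max_size:      # keep only the newest
            self.points = self.points[-self.max_size:]

    def prune(self):
        now = time.monotonic()
        self.points = [p for p in self.points
                       if now - p.created <= self.max_age_s]
```

    Keeping this buffer small bounds the per-frame matching cost, which is where the reported speedup over keypoint-only vSLAM comes from.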

    Keypoint descriptor fusion with Dempster-Shafer Theory

    Keypoint matching is the task of accurately finding the location of a scene point in two images. Many keypoint descriptors have been proposed in the literature aiming at providing robustness against scale, translation and rotation transformations, each having advantages and disadvantages. This paper proposes a novel approach to fuse the information from multiple keypoint descriptors using the Dempster-Shafer Theory of evidence [1], which has proven particularly efficient in combining sources of information providing incomplete, imprecise, biased and conflicting knowledge. The matching results of each descriptor are transformed into an evidence distribution, on which a confidence factor is computed making use of its entropy. Then, the evidence distributions are fused using Dempster-Shafer Theory (DST), taking this confidence into account. As a result of the fusion, a new evidence distribution that improves on the result of the best descriptor is obtained. Our method has been tested with the SIFT, SURF, ORB, BRISK and FREAK descriptors using all possible combinations of them. Results on the Oxford keypoint dataset [2] show that the proposed approach obtains an improvement of up to 10% compared to the best individual descriptor (FREAK).
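
    The fusion mechanism the abstract describes can be illustrated directly: each descriptor's matching scores become a mass function, masses are discounted by a confidence factor (the paper derives it from entropy), and Dempster's rule combines them. The discounting and combination rules below are the standard DST ones; treating candidate matches as singleton hypotheses and the sample numbers are illustrative assumptions.

```python
# Dempster-Shafer fusion sketch: Shafer discounting by a confidence factor,
# then Dempster's rule of combination. Singleton match hypotheses and the
# numbers below are illustrative assumptions, not the paper's data.

def discount(mass, confidence):
    """Scale masses by the confidence and move the rest to full ignorance."""
    frame = frozenset().union(*mass)                # frame of discernment
    out = {a: confidence * m for a, m in mass.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - confidence)
    return out

def dempster_combine(m1, m2):
    """Conjunctive combination with renormalisation over the conflict."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {a: m / (1.0 - conflict) for a, m in fused.items()}

# Two descriptors voting on candidate matches x, y, z for one keypoint:
X, Y, Z = frozenset("x"), frozenset("y"), frozenset("z")
m_a = discount({X: 0.6, Y: 0.3, Z: 0.1}, confidence=0.9)  # low entropy
m_b = discount({X: 0.5, Y: 0.4, Z: 0.1}, confidence=0.6)  # higher entropy
print(dempster_combine(m_a, m_b))                  # mass concentrates on X
```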